# Multi-domain Q&A

## Delta Pavonis Qwen 14B
prithivMLmods · Apache-2.0 · 547 downloads · 3 likes
An enhanced reasoning model built on the Qwen 2.5 14B architecture, optimized for general reasoning and Q&A scenarios, with support for a 128K context window and 8K output tokens.
Tags: Large Language Model, Transformers

## Theta Lyrae Qwen 14B
prithivMLmods · Apache-2.0 · 21 downloads · 2 likes
Theta-Lyrae-Qwen-14B is a 14-billion-parameter model built on the Qwen 2.5 14B architecture, optimized for general reasoning and Q&A, and strong at context understanding, logical reasoning, and multi-step problem solving.
Tags: Large Language Model, Transformers

## Deepseek R1 FP4
nvidia · MIT · 61.51k downloads · 239 likes
An FP4-quantized version of the DeepSeek R1 model, using an optimized Transformer architecture for efficient text generation.
Tags: Large Language Model

## SILMA Kashif 2B Instruct V1.0
silma-ai · 3,432 downloads · 17 likes
SILMA Kashif 2B Instruct v1.0 is an open-source model designed specifically for Arabic and English RAG (Retrieval-Augmented Generation) tasks. It is built on Google's Gemma and supports entity extraction and multi-domain processing.
Tags: Large Language Model, Transformers, Supports Multiple Languages

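Since this model targets RAG workloads, a minimal usage sketch follows. It is illustrative only: the Hugging Face repo id, the prompt wording, and the hard-coded passages are assumptions rather than details from the model card, and a real deployment would retrieve the context from a search index or vector store.

```python
# Minimal RAG-style prompting sketch using the transformers pipeline API.
# The repo id below is assumed; verify the exact identifier on Hugging Face.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="silma-ai/SILMA-Kashif-2B-Instruct-v1.0",  # assumed repo id
    device_map="auto",
)

# In a real RAG pipeline these passages would come from a retriever.
context = (
    "Passage 1: The warranty covers manufacturing defects for 24 months.\n"
    "Passage 2: Shipping damage must be reported within 7 days of delivery."
)
question = "How long does the warranty last?"

messages = [
    {
        "role": "user",
        "content": f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}",
    }
]

result = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat; the last message holds the model's answer.
print(result[0]["generated_text"][-1]["content"])
```
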
## Llama3.1 Korean V1.1 Sft By Aidx
SEOKDONG · Apache-2.0 · 1,242 downloads · 10 likes
A Korean large language model fine-tuned from Llama 3.1, adapted to Korean culture and supporting Korean-language tasks across 53 domains.
Tags: Large Language Model, Safetensors, Korean

## Llama 3.1 Storm 8B
akjindal53244 · 22.93k downloads · 176 likes
Llama-3.1-Storm-8B is built on Llama-3.1-8B-Instruct and aims to improve the dialogue and function-calling capabilities of 8-billion-parameter models.
Tags: Large Language Model, Transformers, Supports Multiple Languages

## Thespis Krangled 7b V2
cgato · 20 downloads · 1 like
A dialogue model trained on diverse datasets, supporting Chinese-language interaction and intended for non-commercial use.
Tags: Large Language Model, Transformers

## Polish Reranker Base Mse
sdadas · Apache-2.0 · 16 downloads · 0 likes
A Polish text-ranking model trained with a Mean Squared Error (MSE) distillation method on a training set of 1.4 million queries and 10 million documents.
Tags: Text Embedding, Transformers, Other

## Polish Reranker Large Ranknet
sdadas · Apache-2.0 · 337 downloads · 2 likes
A Polish text-ranking model trained with the RankNet loss function on a training set of 1.4 million queries and 10 million documents.
Tags: Text Embedding, Transformers, Other

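As a usage illustration, the sketch below reranks passages for a query with a cross-encoder via the sentence-transformers library. The repo id and the Polish example texts are assumptions for illustration; consult the author's model card for the exact identifier and recommended usage.

```python
# Minimal reranking sketch with sentence-transformers' CrossEncoder.
# The repo id below is assumed; verify it on Hugging Face.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("sdadas/polish-reranker-large-ranknet")  # assumed repo id

query = "Jak działa fotosynteza?"
passages = [
    "Fotosynteza to proces, w którym rośliny przekształcają światło w energię chemiczną.",
    "Wisła jest najdłuższą rzeką w Polsce.",
]

# Score each (query, passage) pair, then sort passages from most to least relevant.
scores = reranker.predict([(query, passage) for passage in passages])
ranked = sorted(zip(passages, scores), key=lambda item: item[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.3f}  {passage}")
```
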
## Llama 160M Chat V1
Felladrin · Apache-2.0 · 586 downloads · 19 likes
This is a 160M-parameter Llama chat model, fine-tuned from JackFram/llama-160m, focusing on text generation tasks.
Tags: Large Language Model, Transformers, English